36 research outputs found

    Ethically Aligned Design: An empirical evaluation of the RESOLVEDD-strategy in Software and Systems development context

    Use of artificial intelligence (AI) in human contexts calls for ethical considerations in the design and development of AI-based systems. However, little knowledge currently exists on how to provide useful and tangible tools that could help software developers and designers put ethical considerations into practice. In this paper, we empirically evaluate a method that enables ethically aligned design in a decision-making process. Though this method, titled the RESOLVEDD-strategy, originates from the field of business ethics, it is being applied in other fields as well. We tested the RESOLVEDD-strategy in a multiple case study of five student projects where the use of ethical tools was given as one of the design requirements. A key finding from the study indicates that the mere presence of an ethical tool has an effect on ethical consideration, creating more responsibility even in instances where the use of the tool is not intrinsically motivated. Comment: This is the author's version of the work. The copyright holder's version can be found at https://doi.org/10.1109/SEAA.2019.0001

    ECCOLA -- a Method for Implementing Ethically Aligned AI Systems

    Various recent Artificial Intelligence (AI) system failures, some of which have made the global headlines, have highlighted issues in these systems. These failures have resulted in calls for more ethical AI systems that better take into account their effects on various stakeholders. However, implementing AI ethics in practice is still an ongoing challenge. High-level guidelines for doing so exist, devised by governments and private organizations alike, but they lack practicality for developers. To address this issue, in this paper, we present a method for implementing AI ethics. The method, ECCOLA, has been iteratively developed using a cyclical action design research approach. The method aims at making the high-level AI ethics principles more practical, making it possible for developers to more easily implement them in practice.

    Continuous Software Engineering Practices in AI/ML Development Past the Narrow Lens of MLOps: Adoption Challenges

    Background: Continuous software engineering practices are currently considered state of the art in Software Engineering (SE). Recently, this interest in continuous SE has extended to ML system development as well, primarily through MLOps. However, little is known about continuous SE in ML development outside the specific continuous practices present in MLOps. Aim: In this paper, we explored continuous SE in ML development more generally, outside the specific scope of MLOps. We sought to understand what challenges organizations face in adopting all 13 continuous SE practices identified in existing literature. Method: We conducted a multiple case study of organizations developing ML systems. Data from the cases was collected through thematic interviews. The interview instrument focused on different aspects of continuous SE, as well as the use of relevant tools and methods. Results: We interviewed 8 ML experts from different organizations. Based on the data, we identified various challenges associated with the adoption of continuous SE practices in ML development. Our results are summarized through 7 key findings. Conclusion: The largest challenges we identified seem to stem from communication issues. ML experts seem to continue to work in silos, detached from both the rest of the project and the customers.

    Time for AI (Ethics) maturity model is now

    Publisher Copyright: Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). There appears to be a common agreement that ethical concerns are of high importance when it comes to systems equipped with some sort of Artificial Intelligence (AI). Demands for ethical AI are declared from all directions. As a response, in recent years, public bodies, governments, and universities have rushed in to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies are also publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from the software development perspective. The software engineering paradigm has introduced maturity model thinking, which provides a roadmap for companies to improve their performance from selected viewpoints, known as key capabilities. We want to voice a call to action for the development of a maturity model for AI software. We wish to discuss whether the focus should be on AI ethics or, more broadly, on the quality of an AI system, i.e., a maturity model for the development of AI systems. Peer reviewed

    Building a Maturity Model for Developing Ethically Aligned AI Systems

    Ethical concerns related to Artificial Intelligence (AI) equipped systems are prompting demands for ethical AI from all directions. As a response, in recent years public bodies, governments, and companies have rushed to provide guidelines and principles for how AI-based systems are designed and used ethically. We have learned, however, that high-level principles and ethical guidelines cannot be easily converted into actionable advice for industrial organizations that develop AI-based information systems. Maturity models are commonly used in software and systems development companies as a roadmap for improving performance. We argue that they could also be applied in the context of developing ethically aligned AI systems. In this paper, we propose a maturity model for AI ethics and explain how it can be devised by using a Design Science Research approach. ©2021 Authors. Published by Association for Information Systems.

    COVID-19 Remote Work: Body Stress, Self-Efficacy, Teamwork, and Perceived Productivity of Knowledge Workers

    Due to COVID-19, companies were forced to adopt new work processes and to reduce the use of modern work environments such as collaboration spaces. Professionals from many fields were forced to work remotely, almost overnight. Little is known about the impact of such non-voluntary, long-term remote work on productivity, stress, and other key aspects of work performance. To further our understanding of the impacts of this situation and remote work in general, we conducted an exploratory study of 28 knowledge work professionals (researchers, software developers, interior designers, service designers, and development consultants) from the viewpoint of perceived productivity and the aspects affecting it in this unusual setting. Early results showed a positive influence of self-efficacy and teamwork on productivity during the remote work, while measured physical stress showed no moderating effect on productivity through either the intrinsic or the social factor.